Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update

Related Vulnerabilities: CVE-2021-3524, CVE-2021-3531, CVE-2021-3979

Synopsis

Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update

Type/Severity

Security Advisory: Moderate

Topic

Red Hat Ceph Storage 5.1 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • ceph object gateway: radosgw: CRLF injection (CVE-2021-3524)
  • ceph: RGW unauthenticated denial of service (CVE-2021-3531)
  • ceph: Ceph volume does not honour osd_dmcrypt_key_size (CVE-2021-3979)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

These new packages include numerous bug fixes and enhancements. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage 5.1 Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.1/html/release_notes/index

All users of Red Hat Ceph Storage are advised to upgrade to these new packages, which provide numerous enhancements and bug fixes.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
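
As a rough sketch only (the linked article and the Red Hat Ceph Storage 5.1 upgrade documentation remain the authoritative procedure), applying this update typically involves refreshing the host packages and then upgrading the cephadm-managed cluster daemons to the new container image. The image reference below is illustrative; use the image documented for this release:

    # Update Ceph client and administration packages on each RHEL 8 host
    dnf update

    # Upgrade the cephadm-managed cluster daemons to the new container image
    # (illustrative image reference; substitute the image listed in the upgrade documentation)
    ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-5-rhel8:latest

    # Monitor progress and confirm all daemons are running the new version
    ceph orch upgrade status
    ceph versions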

Affected Products

  • Red Hat Enterprise Linux for x86_64 8 x86_64
  • Red Hat Ceph Storage (OSD) 5 x86_64
  • Red Hat Ceph Storage (MON) 5 x86_64

Fixes

  • BZ - 1259160 - [RFE] SNMP support for RHCS cluster components
  • BZ - 1494059 - [RFE] Add support for dynamic sharding in RGW Multisite
  • BZ - 1654660 - [RFE] Colocation of different Ceph daemons on containerized deployment
  • BZ - 1728344 - Customer DR metadata sync status falls behind(stuck)
  • BZ - 1765484 - RGW does not support '_' symbol in S3 metadata records.
  • BZ - 1821249 - [RFE][dashboard] Display Grafana dashboard for HAProxy used for RGW endpoints
  • BZ - 1835563 - MON crash - src/mon/Monitor.cc: 267: FAILED ceph_assert(session_map.sessions.empty())
  • BZ - 1842808 - [RFE] : Configuration support of nfs on rgw using cephadm
  • BZ - 1857447 - ceph df detail reports dirty objects without a cache tier
  • BZ - 1858720 - [RFE] Vault Data Key API
  • BZ - 1886120 - ceph orch host rm <host> is not stopping the services deployed in the respective removed hosts
  • BZ - 1890109 - [rbd_support] passing invalid interval removes entire schedule
  • BZ - 1890113 - [5.0] Ceph-Dashboard - Device health status is not getting listed under hosts section in 5.0 dashboard
  • BZ - 1900127 - PG state deep-scrub+repair although no inconsistent PG
  • BZ - 1901644 - [cephadm] 5.0 - bootstrap logs are not getting captured fully in a cephadm.log due to less file size
  • BZ - 1905470 - [RGW] [boto] PUT on versioned bucket fails with NoSuchKey
  • BZ - 1915362 - [cee/sd][MGR][insights] the insights command is logging into ceph.audit.log excessively - "[{"prefix":"config-key set","key":"mgr/insights/health_history/ ...
  • BZ - 1921204 - [RFE]increase HTTP headers size in beast.
  • BZ - 1926629 - [RFE] Configure the IP address for the monitoring stack components
  • BZ - 1936370 - ceph-dashboard [RFE] Display users current quota usage
  • BZ - 1936415 - [GSS][RFE] notification: cannot delete a notification from a deleted bucket
  • BZ - 1936887 - [RFE][Cephadm]: rgw-ha daemon in cephadm should allow choice of RGWs to be covered by HAproxy
  • BZ - 1939064 - rgw multipart uploads are failing.
  • BZ - 1939959 - [Ceph-Dashboard] Bucket creation fails when selecting locking with certain values.
  • BZ - 1940813 - rgw-orphan-list tool consuming more space than expected 5.1
  • BZ - 1942510 - [GSS][rgw] return result 405 MethodNotAllowed for unknown resources
  • BZ - 1942593 - [ceph-dashboard]- Bucket name constraints
  • BZ - 1943494 - ceph orch ps fetches wrong status for OSD daemons in maintenance mode
  • BZ - 1943967 - mgr/prometheus should provide additional metrics to support per pool compression workflows
  • BZ - 1944009 - [GSS] "RGWReshardLock::lock failed to acquire lock on reshard.0000000002 ret=-16" messages are reported in rgw log
  • BZ - 1944503 - Command to check per-daemon-events not working
  • BZ - 1944769 - RFE: Provision to apply custom images for monitoring services during bootstrap
  • BZ - 1945583 - [cephadm][ceph dashboard][security] need bootstrap cli option to set grafana admin password during cluster bootstrapping
  • BZ - 1946478 - [GSS][ceph-volume]"ceph-volume lvm batch" shows "% of device" as 0% for DB device
  • BZ - 1947024 - [RFE] Need upgrade status field to indicate result
  • BZ - 1947087 - [workload-dfg] Adding MONs via spec file removed bootstrap node
  • BZ - 1947497 - [RFE] Removing MON from bootstrap breaks cluster access
  • BZ - 1949359 - Provide the limit information if the limit count is greater than the vendor/model/path/rotational device count
  • BZ - 1950644 - [Ceph Dashboard] Creation of snapshot for subvol's folder is failing with 500 Internal error
  • BZ - 1951674 - CVE-2021-3524 ceph object gateway: radosgw: CRLF injection
  • BZ - 1952120 - [ceph-dashboard][RGW:] exception seen on creating a bucket with only bucket name
  • BZ - 1953903 - Crush map viewer in dashboard does not displays OSD's which are not part of the ‘default’ CRUSH root
  • BZ - 1954392 - [RFE] client latency getting impacted after bucket reaching max size quota
  • BZ - 1954971 - Migrate NFS exports from ceph-ansible to mgr/nfs
  • BZ - 1955326 - CVE-2021-3531 ceph: RGW unauthenticated denial of service
  • BZ - 1955513 - [Ceph Dashboard] - Request for refresh button in iscsi targets page
  • BZ - 1956601 - [RADOS]: Global Recovery Event is running continuously with 0 objects in the cluster
  • BZ - 1958758 - [cephadm] - orch - incorrect information about `osd rm stop` in help message
  • BZ - 1958927 - [cephadm] orch upgrade check : if ceph version is lesser than current don't update nodes in needs update
  • BZ - 1959159 - [RBD] Numerous data availability and corruption issues with persistent writeback cache in ssd mode
  • BZ - 1959354 - [cephadm][upgrade] During build to build upgrade got stuck for long hours on single mds host
  • BZ - 1959508 - OSD count mismatched with ceph orch commands
  • BZ - 1962744 - [GSS][ceph-volume]'ceph-volume' ignores the 'bluestore_block_db_size' paramenter provided in ceph.conf
  • BZ - 1963947 - [RFE] Implement wrapper function to run pv/vg/lv commands in host namespace for containerized RHCS
  • BZ - 1964312 - [RFE] : Removing an offline host
  • BZ - 1964453 - [ceph-dashboard][rgw] : Dashboard should the display the realm name and realm id.
  • BZ - 1965186 - Url used to fetch oidc certs is specific to keycloak, make it generic using .well-known/openid-configuration
  • BZ - 1966522 - [ceph dashboard][RFE]: include mfa_ids in User Details section, for a user configured with MFA
  • BZ - 1967122 - [RFE] return cluster fsid from the rgw admin ops API
  • BZ - 1967440 - [ceph-ansible] : cephadm-adopt playbook does not bring the rbd mirroring up once the cluster is migrated from ansible to cephadm
  • BZ - 1968563 - [RGW : NFS] Crash observed for process 'ganesha.nfsd' on performing an upgrade from 4.2GA to 5.0
  • BZ - 1968579 - [RGW : NFS] Post an upgrade from 4.2GA to 5.0, rgw-admin bucket list errors out with "could not init bucket: (5) Input/output error" for the buckets created from the nfs mount.
  • BZ - 1969545 - [GSS][RGW] Bucket reporting negative num_objects value in bucket limit check output
  • BZ - 1970324 - [ceph-dashboard] Regular expression is not working as expected for 'name' field on Create bucket page
  • BZ - 1970549 - [CephFS-Mirror} - False warning of "keyring not found" seen in cephfs-mirror service status is misleading.
  • BZ - 1972274 - [build] [TPS] %{dist} found instead of %{?dist} in: tcmu-runner - Release engineering policy is to use %{?dist}
  • BZ - 1973155 - Enable autotune for osd_memory_target
  • BZ - 1974882 - slow performance on parallel rm operations to the same PVC RWX based on CephFS
  • BZ - 1975338 - [cephadm][ceph-dashboard] Dashboard URL address is incorrect after bootstrap with IPV6 address
  • BZ - 1976874 - [5.0][rgw-multisite][Scale-testing][LC]: Deleting 16.5M objects via LC from the primary, does not delete the respective number of objects from secondary.
  • BZ - 1976920 - [cee/sd][RFE][cephadm] Add support to customize <daemon>-container cpu limit in cephadm
  • BZ - 1979476 - Iscsi gateways are not showing "UP" in dashboard if not collocated with bootstrap nodes
  • BZ - 1979546 - [cephadm] monitoring spec file doesn't support custom port option as per the doc
  • BZ - 1980785 - cephadm: remove iscsi service fails when the dashboard isn't deployed
  • BZ - 1981606 - ceph orch upgrade status - progress message text needs to be fixed
  • BZ - 1981852 - [cephadm] [tracker] : Special host label - _no_schedule
  • BZ - 1982277 - [5.0][rgw]: radosgw-admin datalog trim --shard-id <ID> --end-marker <marker> ends with Segmentation fault
  • BZ - 1982965 - [Ceph Dashboard]: Implement the ability to drain the host
  • BZ - 1982995 - [Ceph Dashboard]: Cluster Creation/Expansion Workflow
  • BZ - 1984368 - [5.0][upgrade][: Cephadm module failed on doing a build to build upgrade with error "'NoneType' object has no attribute 'target_id' "
  • BZ - 1986160 - [RGW] radosgw-admin user stats showing 0 value for "size_utilized" and "size_kb_utilized" fields
  • BZ - 1988274 - [CephFS-Snapshot] - cephfs snapshot schedule status doesn't list the snapshot count properly
  • BZ - 1988287 - [CephFS-Snapshot] - cephfs snapshot created with schedules stopped on nfs volume after creating successfully for 24 hours.
  • BZ - 1990382 - dashboard: integrate Dashboard with mgr/nfs module interface
  • BZ - 1997332 - [RFE] Global on/off flag for PG autoscale feature
  • BZ - 1997964 - [GSS][cephadm][Testathon] Running "cephadm shell --no-hosts" gives error: unrecognized arguments: --no-hosts
  • BZ - 1998009 - [RFE] Cephadm support for installing multiple Co-located RGW containers
  • BZ - 1998010 - [GSS][cephadm] [Testathon] ceph orch ps output does not show "MEM LIMIT"
  • BZ - 2000085 - [RGW][SSE-KMS: Vault] Add Support customed CA certificate from vault KMS for SSE encryption
  • BZ - 2002359 - [cee/sd][ceph-dashboard] Ceph dashboard do not show degraded objects if they are less than 0.5% under "Dashboard->Capacity->Objects block"
  • BZ - 2002428 - whole object recovery instead of partial recovery after osd restart
  • BZ - 2003207 - [Bluestore] Remove the possibility of replay log and file inconsistency
  • BZ - 2005959 - volumes permissions on top-level directories can be incorrect
  • BZ - 2005962 - MDS does not include btime in cap updates
  • BZ - 2006174 - rgw/sts: support for ABAC in AssumeRoleWithWebIdentity
  • BZ - 2006175 - Url used to fetch oidc certs is specific to keycloak, make it generic using .well-known/openid-configuration
  • BZ - 2006178 - rgw/sts: OIDC JWT validation does not use modulus and exponent for token validation. Add support for the same.
  • BZ - 2006184 - rgw/sts: assumed-role: s3api head-object returns 403 Forbidden, even if role has ListBucket, for non-existent object
  • BZ - 2006193 - Session policies restrict permissions granted by Identity based policies and/ or Resource policies. RGW incorrectly evaluates it currently.
  • BZ - 2006194 - Cannot perform server-side copy using STS credentials
  • BZ - 2006217 - [RFE] Add the role being assumed by the user to the RGW opslogs when using STS assumerole
  • BZ - 2006415 - [cee/sd][ceph-ansible] cepadm-adopt.yml playbook fails at: TASK [manage nodes with cephadm]
  • BZ - 2006703 - [RGW] Using LDAP with HDP/S3A unable to put objects
  • BZ - 2006949 - Notify timeout can induce a race condition as it attempts to resend the cache update
  • BZ - 2007298 - cephadm mds upgrade procedure is incomplete
  • BZ - 2007306 - Cannot perform server-side copy using STS credentials in RHCS 5
  • BZ - 2007516 - [RBD]ISCSI- Ceph cluster goes to error state after performing multiple removal and deployment of ISCSI
  • BZ - 2007607 - [cephadm] [tracker] : Removal of deprecated verify-prereqs option from help message
  • BZ - 2008275 - os/bluestore: list obj which equals to pend
  • BZ - 2008587 - [DR] when Relocate action is performed and the Application is deleted completely rbd image is not getting deleted on secondary site
  • BZ - 2008822 - cephadm/upgrade: Upgrade from 5.0 to a 5.1 testfix build fails with error "Module 'cephadm' has failed dependency: invalid syntax (module.py)"
  • BZ - 2008831 - [ceph-dashboard] In Manager Modules table values not aligned with headers
  • BZ - 2008858 - rgw: With policy specifying invalid arn, users can list content of any bucket
  • BZ - 2009315 - [RFE] Need alert name called Daemon Crash
  • BZ - 2009523 - msg/async/ProtocolV2: recv_stamp of a message is set to a wrong value
  • BZ - 2009552 - Progress module optimizations
  • BZ - 2010454 - Adding admin node again with different labels removes ceph.client.admin.keyring and ceph.conf file
  • BZ - 2011456 - [RGW] Malformed eTag follow up fix after bz1995365
  • BZ - 2013176 - TASK [ceph-dashboard : set the rgw user] fails during upgrade from 4x to 5.1
  • BZ - 2013574 - backport mgr/nfs to downstream 5.1
  • BZ - 2014005 - [ceph-dashboard] Create silence not working properly from Active alerts
  • BZ - 2014500 - beast not working on IPv6 failed to parse endpoint=fd00:fd00:fd00:3000::397:808
  • BZ - 2015205 - msgr/async: fix unsafe access in unregister_conn()
  • BZ - 2016380 - mds: crash when journaling during replay
  • BZ - 2017449 - [Ceph-dashboard] Doc link to configure and enable the monitoring functionality for grafana is incorrect
  • BZ - 2017508 - ceph-ansible must allow upgrading to 5.1 with nfs+rgw
  • BZ - 2017620 - rbd-nbd: generate and send device cookie with netlink connect request
  • BZ - 2017621 - rbd: promote rbd-nbd attach and detach at rbd integrated CLI
  • BZ - 2017778 - rgw-multisite/dynamic resharding: Bucket stats reports incorrect number of objects for a bucket after dynamic resharding.
  • BZ - 2017821 - [RFE]: Cephadm NFS cluster creates RGW exports at bucket level only
  • BZ - 2017880 - Bucket sync status for tenanted buckets reports 'failed to read source bucket info: (2) No such file or directory'
  • BZ - 2017992 - OMAP upgrade to PER-PG format result in ill-formatted OMAP keys.
  • BZ - 2018110 - Ceph monitor crash after upgrade from ceph
  • BZ - 2018140 - [dashboard] Allow editing of services
  • BZ - 2018248 - [RGW-NFS] export at user-level crash on readdir
  • BZ - 2018378 - [RFE][Dashboard] Unable to get more details for the popped up notification
  • BZ - 2019978 - OSD might wrongly attempt to use "slow" device when single device is backing the store
  • BZ - 2021095 - [RADOS] "disallowed_leaders" section not displayed in "ceph mon dump" command
  • BZ - 2021177 - [Dashboard] Branding about page version needs to update to 5.1
  • BZ - 2021311 - mds opening connection to up:replay/up:creating daemon causes message drop
  • BZ - 2021387 - rgw: URL-decode S3 and Swift object-copy URLs fix not in nautilus
  • BZ - 2021448 - [Ceph dashboard] Spelling mistake in Network address field
  • BZ - 2021458 - [ceph dashboard] In create host earlier labels not got retained
  • BZ - 2021470 - [ceph dashboard] Daemon events column not aligned properly with table
  • BZ - 2021600 - rgw: deleting and purging a bucket can get stuck in an endless loop 5.1
  • BZ - 2021738 - [RADOS] Have default autoscaler profile as scale-up in RHCS 5.1
  • BZ - 2021926 - [RGW] bucket stats output has incorrect num_objects in rgw.none and rgw.main on multipart upload
  • BZ - 2022052 - Cannot download an object, browser returns "HTTP/1.1 404 Not Found" error with a payload "NoSuchBucket"
  • BZ - 2022190 - stretch mode: blocks kernel rbd and CephFS from mounting
  • BZ - 2022531 - [Workload-DFG] [Single Site] I/O blocked on versioned buckets - looks like dynamic bucket reshard stuck on versioned bucket!
  • BZ - 2023171 - [Rados] Unable to create new pools in 5.1 due to "Error ERANGE: "
  • BZ - 2023377 - rgw: fix `bi put` not using right bucket index shard 5.1
  • BZ - 2023598 - [ceph-dashboard] In Pool overall performance new metrics for per pool compression grafana tabs not exist
  • BZ - 2024029 - rgw: remove prefix & delim params for bucket removal & mp upload abort
  • BZ - 2024154 - MDSMonitor: no active MDS after cluster deployment
  • BZ - 2024176 - [Workload-DFG][CephQE]Cephadm - not able to deploy all the OSD's in a given cluster
  • BZ - 2024788 - CVE-2021-3979 ceph: Ceph volume does not honour osd_dmcrypt_key_size
  • BZ - 2025497 - [rhcs-5.1][rgw-ms][reshard]: bucket stats and bucket list for a bucket failing for a resharded bucket and 's3cmd ls' errors with (NoSuchKey)
  • BZ - 2025800 - [RADOS Stretch cluster] PG's stuck in remapped+peering after deployment of stretch mode
  • BZ - 2025870 - [RBD PWL] reordering of ops and other fixes for persistent memory cache
  • BZ - 2027374 - [RGW] reshard cancel errors with (22) Invalid argument 5.1
  • BZ - 2027446 - Metadata synchronization failed,"metadata is behind on 1 shards" appear
  • BZ - 2027728 - Removal of admin node also acting as a bootstrap node should be unsuccessful
  • BZ - 2028247 - [RFE] Allow processing lifecycle for a single bucket only
  • BZ - 2028416 - [CephFS] File Quota attributes not getting inherited to the cloned volume
  • BZ - 2028477 - [5.1][rgw-multisite][reshard]: Bucket sync is enabled after resharding a bucket where the bucket sync was disabled.
  • BZ - 2029455 - [5.1][RGW] radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --purge-objects failing crashing on buckets having incomplete multiparts
  • BZ - 2029695 - [cee/sd][cephadm] ceph-volume: passed block_db devices: 0 physical, 1 LVM --> ZeroDivisionError: integer division or modulo by zero
  • BZ - 2029778 - [cee/sd][rados] ceph-osd daemon crashed with Segmentation fault in thread 7f79a8cc0700 thread_name:msgr-worker-2
  • BZ - 2030617 - Session policies restrict permissions granted by Identity based policies and/ or Resource policies. RGW incorrectly evaluates it currently.
  • BZ - 2032764 - [RBD]- Failed to start IOs when SSD mode persistent write back cache is enabled in ceph version 16.2.7-3.el8cp
  • BZ - 2032875 - mgr/dashboard: Cluster Expansion - Review Section: fixes and improvements
  • BZ - 2033543 - [ceph-dashboard] - expansion wizard branding issues
  • BZ - 2035490 - dashboard: REST API documentation broken
  • BZ - 2035531 - [RGW] radosgw-admin crashes when we perform a bucket stat operation on a indexless bucket
  • BZ - 2035566 - [RGW] Copy source from a bucket having a public allow policy errors with connection refused
  • BZ - 2037330 - [5.1][rgw-resharding]: user.rgw.acl attribute disappears after a bucket reshard operation.
  • BZ - 2037349 - [5.1][RGW] radosgw-admin bucket check --fix not removing orphaned entries
  • BZ - 2037691 - [4.2 to 5.1 upgrade]: cephadm-adopt fails at task TASK [create rgw export].
  • BZ - 2037768 - [4.2 to 5.1 upgrade]: Observing KeyError: 'eth0', `cephadm list-networks` failed: cephadm exited with an error code: 1
  • BZ - 2037990 - [CEE] Client uploaded S3 object not listed but can be downloaded
  • BZ - 2038036 - [RGW] client.admin crash seen on executing 'bucket radoslist ' command with indexless buckets
  • BZ - 2039276 - CephFS: Failed to create clones if the PVC is filled with smaller size files
  • BZ - 2039413 - [cephadm-ansible][cephadm-clients playbook]: need a newline character on ceph.conf
  • BZ - 2039899 - [RFE][5.1][dynamic-resharding]: Reshard list should suggest a 'tentative' shard value the bucket can be resharded to.
  • BZ - 2040243 - [GSS][Ceph RGW][After a migration from 4.0 to 5.0, RGW container is crashing at 'rados_nobjects_list_next2: Operation not permitted']
  • BZ - 2040528 - pgs wait for read lease after osd start
  • BZ - 2041660 - mds: reset heartbeat in each MDSContext complete()
  • BZ - 2042692 - sync-log-trim thread crashes on empty endpoints, should give error and bail
  • BZ - 2044756 - [Ceph-dashboard] Unable to create snmp-gateway service from dashboard
  • BZ - 2044836 - mon is in CLBO after upgrading to 4.10-113
  • BZ - 2044978 - Orch list services command fails to generate response in yaml format
  • BZ - 2045886 - [5.1] [RGW] client.admin crash seen on executing 'bucket radoslist ' command with indexless buckets
  • BZ - 2048734 - cephadm-adopt playbook fails managing the existing nodes
  • BZ - 2049542 - Need RH branded image for snmp-notifier
  • BZ - 2049851 - mon: osd pool create <pool-name> with --bulk flag
  • BZ - 2050261 - [GSS][RGW] Objects appear in the list-objects output after deletion
  • BZ - 2051525 - Backport rxbounce mapping option
  • BZ - 2051894 - [cli] mark optional positional arguments as such in help output
  • BZ - 2052205 - Data race in RGWDataChangesLog::ChangeStatus
  • BZ - 2052614 - [RFE] Adding a check and blocker for upgrading our current non ODF clusters from 4.x to 5.1
  • BZ - 2052616 - [RFE] Adding a check and blocker for upgrading our current non ODF clusters from 5.0 to 5.1
  • BZ - 2052927 - CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect
  • BZ - 2053645 - diff-iterate reports incorrect offsets in fast-diff mode
  • BZ - 2053652 - diff-iterate include_parent functionality is broken in fast-diff mode
  • BZ - 2057414 - [rbd-mirror] disabling and shortly after re-enabling mirroring on the image can lead to split-brain
  • BZ - 2057496 - [ceph-volume] 'lvm create' must FAIL if specified device has existing partitions
  • BZ - 2057528 - cephadm-adopt Wrong labels applied to the "adopted" nodes
  • BZ - 2058047 - [RFE] [Upgrade to 5.1] Provision of a workaround to enable upgrade from 4x to 5.1 (which is currently blocked) in testing builds in order to unblock QE testing
  • BZ - 2058049 - [RFE] [Upgrade to 5.1] Provision of a workaround to enable upgrade from 5.0 to 5.1 (which is currently blocked) in testing builds in order to unblock QE testing
  • BZ - 2059452 - [rgw-multisite][sync run] Segfault in RGWGC::send_chain()
  • BZ - 2060519 - [ceph-dashboard] Host Overall performance AVG Disk utilization shows value N/A always
  • BZ - 2062627 - Grafana deployment fails with default beta build.
  • BZ - 2063702 - [RFE] Adding a check and blocker for upgrading only RGW multisite clusters from 4.x to 5.1
  • BZ - 2064327 - [5.1][LC][rgw-resharding]: rgw crashes while resharding and LC process happen in parallel.
  • BZ - 2069407 - We need to add verification on cluster bootstrap for Multisite configurations